124 research outputs found

    Optimization and sensitivity analysis of computer simulation models by the score function method

    Experimental Design;Simulation;Optimization;Queueing Theory

    Polynomial Time Algorithms for Estimation of Rare Events in Queueing Models

    This paper presents a framework for analyzing the time complexity of rare-event estimators for queueing models. In particular, it deals with polynomial- and exponential-time switching regenerative (SR) estimators for the steady-state probabilities of excessive backlog in the GI/GI/1 queue and some of its extensions. The SR estimators are based on large deviations theory and an exponential change of measure, which is parametrized by a scalar t. We show how to find the optimal value w of the parameter t, which leads to the optimal exponential change of measure (OECM), and we find conditions under which the OECM generates polynomial-time estimators. Finally, we investigate the "robustness" of the proposed SR estimators, in the sense that we determine how much one can perturb the optimal value w in the OECM such that the SR estimator still yields dramatic variance reduction and remains useful in practice. Our extensive numerical results suggest that if the optimal parameter value w is perturbed by up to 20%, we lose only 2--3 orders of magnitude of variance reduction, compared with the tens of orders of magnitude achieved under the optimal value w.
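As a toy illustration of an exponential change of measure (not the paper's SR estimator for the GI/GI/1 queue), one can estimate the rare-event probability P(X > gamma) for X ~ Exp(1) by sampling from a tilted exponential law and reweighting; the true value e^{-gamma} makes the estimate easy to check:

```python
import math
import random

def rare_event_is(gamma, t, n=100_000, seed=0):
    """Estimate P(X > gamma) for X ~ Exp(1) by exponential tilting.

    Samples from the tilted density Exp(1 - t), for 0 < t < 1, and
    reweights each hit with the likelihood ratio
    f(x)/g(x) = e^{-t x} / (1 - t).
    """
    rng = random.Random(seed)
    rate = 1.0 - t
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(rate)                # draw from the tilted law
        if x > gamma:                            # rare event under f, common under g
            total += math.exp(-t * x) / rate     # likelihood ratio weight
    return total / n

# True value is exp(-gamma) ~ 4.5e-5 for gamma = 10; naive Monte Carlo
# would almost never observe the event, while the tilted estimator is
# accurate.  Choosing t = 1 - 1/gamma shifts the sampling mean to gamma,
# mirroring the role of the optimal tilting parameter discussed above.
est = rare_event_is(10.0, t=0.9)
```

Perturbing t away from 1 - 1/gamma in this sketch inflates the estimator's variance, which is the robustness question the abstract studies for its SR estimators.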

    Variance Reduction Techniques in Monte Carlo Methods

    Monte Carlo methods are simulation algorithms that estimate a numerical quantity in a statistical model of a real system; these algorithms are executed by computer programs. Variance reduction techniques (VRT) are still needed, even though computer speed has been increasing dramatically ever since the introduction of computers. This increased computer power has stimulated simulation analysts to develop ever more realistic models, so the net result has not been faster execution of simulation experiments; e.g., some modern simulation models need hours or days for a single ’run’ (one replication of one scenario, i.e., one combination of simulation input values). Moreover, some simulation models represent rare events (which have extremely small probabilities of occurrence), so even modern computers would take ’for ever’ (centuries) to execute a single run, were it not that special VRT can reduce these excessively long runtimes to practical magnitudes.
    common random numbers;antithetic random numbers;importance sampling;control variates;conditioning;stratified sampling;splitting;quasi Monte Carlo
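One of the techniques listed in the keywords, antithetic random numbers, can be sketched in a few lines (a toy illustration with an integrand chosen so the answer is known, not an example from the survey itself):

```python
import math
import random

def antithetic_mc(f, n_pairs=50_000, seed=0):
    """Estimate E[f(U)] for U ~ Uniform(0, 1) using antithetic pairs.

    Each uniform draw u is paired with its antithetic partner 1 - u;
    for a monotone f the two evaluations are negatively correlated,
    so their average has lower variance than two independent draws.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))   # negatively correlated pair
    return total / n_pairs

# E[e^U] = e - 1; the pair average already lies in a narrow band around
# the true value, so far fewer random numbers are needed than with
# plain Monte Carlo at the same accuracy.
est = antithetic_mc(math.exp)
```

The same pairing idea underlies common random numbers, where the correlation is induced across scenarios rather than within one estimator.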

    Monte Carlo sampling and variance reduction techniques


    Variance Reduction Techniques in Monte Carlo Methods


    Adaptive Importance Sampling in General Mixture Classes

    In this paper, we propose an adaptive algorithm that iteratively updates both the weights and component parameters of a mixture importance sampling density so as to optimise the importance sampling performances, as measured by an entropy criterion. The method is shown to be applicable to a wide class of importance sampling densities, which includes in particular mixtures of multivariate Student t distributions. The performances of the proposed scheme are studied on both artificial and real examples, highlighting in particular the benefit of a novel Rao-Blackwellisation device which can be easily incorporated in the updating scheme.
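A simplified one-dimensional sketch of such an adaptive mixture update (a toy bimodal target and Gaussian components are assumed here, rather than the multivariate Student t mixtures and entropy criterion of the paper):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def target(x):
    # toy bimodal target: equal mixture of N(-3, 1) and N(3, 1)
    return 0.5 * normal_pdf(x, -3.0, 1.0) + 0.5 * normal_pdf(x, 3.0, 1.0)

def adapt_mixture(n_iter=10, n=2000, seed=0):
    """Adapt a two-component Gaussian mixture proposal toward the target.

    Each iteration draws from the current proposal, self-normalises the
    importance weights, and updates mixture weights and means through
    Rao-Blackwellised component responsibilities.
    """
    rng = random.Random(seed)
    w = [0.5, 0.5]        # mixture weights
    mu = [-1.0, 1.0]      # component means, deliberately far from the modes
    sigma = 1.0
    for _ in range(n_iter):
        xs, omega = [], []
        for _ in range(n):
            k = 0 if rng.random() < w[0] else 1
            x = rng.gauss(mu[k], sigma)
            q = w[0] * normal_pdf(x, mu[0], sigma) + w[1] * normal_pdf(x, mu[1], sigma)
            xs.append(x)
            omega.append(target(x) / q)          # importance weight
        s = sum(omega)
        omega = [o / s for o in omega]           # self-normalise
        new_w, new_mu = [0.0, 0.0], [0.0, 0.0]
        for x, o in zip(xs, omega):
            q = w[0] * normal_pdf(x, mu[0], sigma) + w[1] * normal_pdf(x, mu[1], sigma)
            for k in range(2):
                r = w[k] * normal_pdf(x, mu[k], sigma) / q   # responsibility
                new_w[k] += o * r
                new_mu[k] += o * r * x
        mu = [new_mu[k] / new_w[k] for k in range(2)]
        w = new_w                                # already sums to one
    return w, mu

# After a few iterations the component means migrate toward the target
# modes at -3 and +3, even though they started near the origin.
w, mu = adapt_mixture()
```

Averaging the responsibilities rather than hard-assigning each sample to one component is the Rao-Blackwellisation step the abstract refers to; it removes the extra noise of the component labels from the update.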